A Wrinkle on Satisficing Search Problems
Authors: J. Barnett and D. Cohen
Abstract
The problem of optimally ordering the execution of independent disjuncts is explored. Only a single answer is sought, not necessarily the best one; by definition, this is called satisficing search. Since the disjuncts are independent, the total combined probability that a solution is found does not depend on the execution order. However, the ordering does affect the total expected execution time, because execution ceases as soon as any solution is discovered. Therefore, the optimal ordering is the one that minimizes the total expected work. The new result is an algorithm to find this optimal ordering when the effects of executing a disjunct must be undone before another one can be tried. The algorithm is shown to have time complexity O(n log n), where n is the number of disjuncts. This is the same complexity as for the original problem, where undo times are ignored.

Introduction

Many investigators have examined problems of satisficing search: try the available methods one at a time until one of them satisfies the stated criteria, then stop. The objective is to find a method ordering with the least expected cost to solve the problem. Typically, only the probability of success and the expected cost are known for each method. Method i is pairwise preferred to method j if, given only these two methods, it is less expensive to try i first. Pairwise preference is transitive. Therefore, if the optimal ordering of n methods is m_1 m_2 ... m_n and a new method m_{n+1} is added, it is merely inserted somewhere in the original ordering; all original methods stay in the same position relative to one another. Below, the original problem is generalized: associated with each method is a cost that must be paid, after trying the method, if another method is to be used. For example, the cost may be the time to undo the changes to the problem-solving state so that another method can be executed in the proper context.
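As a minimal illustration of the setup above, the following sketch (with made-up probabilities and costs) checks that the overall success probability of a set of independent methods is the same for every execution order, while the expected cost is not:

```python
from itertools import permutations

# Hypothetical (p, c) pairs: success probability and expected cost per method.
methods = [(0.9, 5.0), (0.5, 1.0), (0.2, 2.0)]

def success_probability(order):
    # P(some method succeeds) = 1 - prod(q_i); the product is order-independent.
    fail = 1.0
    for p, _ in order:
        fail *= 1.0 - p
    return 1.0 - fail

def expected_cost(order):
    # A method's cost is paid only if every earlier method in the order failed.
    total, reach = 0.0, 1.0
    for p, c in order:
        total += reach * c
        reach *= 1.0 - p
    return total

probs = {round(success_probability(o), 12) for o in permutations(methods)}
costs = {round(expected_cost(o), 12) for o in permutations(methods)}
print(len(probs), len(costs))  # probability is order-invariant; cost is not
```

For these numbers every permutation yields a success probability of 0.96, but the expected costs differ, so only the cost minimization remains as a problem.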
The pairwise preference relation is no longer transitive, and the simple insertion scheme is lost for the generalized problem. However, the criterion for an optimal ordering is straightforward to derive. An algorithm that finds the optimal ordering is given, and it is shown to be of the same time complexity as the one for the original problem, namely O(n log n).

1. This research is supported by the Defense Advanced Research Projects Agency under Contract No. MDA903 81 C 0335. Views and conclusions contained in this report are the authors' and should not be interpreted as representing the official opinion or policy of DARPA, the U.S. Government, or any person or agency connected with them.

The Original Problem

A set of methods is available, each of which has the potential to solve the same given problem. The methods can be applied to the problem in any order; however, they may only be tried one at a time. If one of the methods solves the problem, the remaining untried methods need not be used. In other words, only one solution is desired or necessary, and there is no interest in extra solutions nor in any other results that might be produced by method execution. The usual statement of problems in this class assumes that the probability that a particular method is successful and the execution cost of the method are independent of the order of execution and of whether or not any other method is successful. Without this independence assumption, there is no general optimal ordering, because the tradeoff between a higher probability of success and a lower expected cost is an application-dependent issue; the most general result possible, then, is a partial ordering for method execution. However, with the independence assumption it follows that the total probability that at least one method will find a solution is independent of the order in which the methods are tried. Therefore, the residual problem is to determine the ordering with the least expected cost.
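The two-method comparison behind pairwise preference can be written out directly. A short derivation (with p_i the success probability of method i, q_i = 1 - p_i, and c_i its expected cost; the expected cost of trying i then j is c_i + q_i c_j):

```latex
c_i + q_i c_j \le c_j + q_j c_i
\iff (1 - q_j)\, c_i \le (1 - q_i)\, c_j
\iff p_j c_i \le p_i c_j
\iff \frac{p_i}{c_i} \ge \frac{p_j}{c_j}
```

Because the final form compares per-method ratios, the resulting preference relation is transitive; this is precisely the property that the undo-cost generalization destroys.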
A typical example of this class is the following: let p_i be the probability that method i solves the stated problem, and define q_i = 1 - p_i. Further, let c_i be the expected cost of trying method i; for example, c_i = p_i s_i + q_i u_i, where s_i is the expected cost when successful and u_i is the expected cost when unsuccessful. What is the best order in which to apply a given set of methods to find a solution with the least expected cost? The answer is simple:2 define φ_i = p_i / c_i. Apply first the method with the largest φ; if it fails, try the method with the next largest φ, etc. The order of application among methods with the same φ value is immaterial to the total expected cost of finding a solution. Two features of this result are noteworthy. First, a merit score (namely φ_i = p_i / c_i) can be calculated for a method independent of what other methods exist. Thus, if a new method becomes available, it can be evaluated separately and inserted into the current ordering of previously available methods with the assurance that the new ordering is optimal. Second, as a consequence, the pairwise preference ordering is transitive: method i is preferred to method j if, given only methods i and j, the expected cost of trying i first is less.

2. See Simon, H. A., and J. B. Kadane, "Optimal Problem-Solving Search: All-or-None Solutions," Artificial Intelligence 6 (1975), 235-247, for this and other related results.

J. Barnett and D. Cohen

Another possible generalization of this class of ordering problems suggests itself: suppose the detoxification time between method i and method j is d_ij, i.e., d depends upon both the preceding and the succeeding methods. Now the computation of the optimal ordering becomes at least as hard as the version of the traveling salesman problem where the salesman must visit each city once but does not need to return to his starting point. To see this, assume that p and c are the same for all n methods and that q = 1 - p is very nearly equal to 1.
Then the cost of an ordering j = j_1 ... j_n, where j is a permutation of the first n natural numbers, is

E(j) = Σ_{i=1}^{n} q^{i-1} c + Σ_{i=1}^{n-1} q^{i} d_{j_i j_{i+1}}.

As q approaches 1, minimizing E(j) amounts to minimizing the sum of the d terms, i.e., to finding a shortest Hamiltonian path through the methods.

For all j, the differences are calculated and the maximal one selected in time O(n) by the algorithm described next.

The Algorithm

The algorithm in Figure 1 is written in SIMULA as the class method_ordering. There are n methods stored in the array m. Each method has the defined attributes id (a method identifier) and p, c, and d as described above. The derived attributes of a method are q, e, and phi, where phi = p/e. The procedure sort_on_phi is not shown explicitly; it may be any sorting algorithm that orders m on nonincreasing values of phi in time O(n log n). If order is applied to the numerical example above, these steps occur:

1. The methods are sorted into the order {321} by their phi values.
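The effect of undo costs can be seen in a small sketch (hypothetical numbers; this is a brute-force check, not the paper's O(n log n) algorithm). Each method here carries a single undo cost d_i, paid whenever the method fails and another must be tried, and the original p/c greedy rule is compared against exhaustive enumeration:

```python
from itertools import permutations

# Hypothetical methods: (p, c, d) = success probability, expected cost,
# and the cost to undo the method's effects before another can be tried.
methods = [(0.3, 2.0, 1.0), (0.6, 3.0, 4.0), (0.1, 0.4, 0.2)]

def expected_cost(order):
    # d_i is paid only when method i fails and another method remains to try.
    total, reach = 0.0, 1.0
    for k, (p, c, d) in enumerate(order):
        total += reach * c
        reach *= 1.0 - p
        if k < len(order) - 1:
            total += reach * d
    return total

def greedy_by_ratio(ms):
    # The original problem's rule (undo costs ignored): sort on p/c.
    return sorted(ms, key=lambda m: m[0] / m[1], reverse=True)

best = min(permutations(methods), key=expected_cost)
greedy = tuple(greedy_by_ratio(methods))
print(expected_cost(greedy), expected_cost(best))
```

For these numbers the p/c order is strictly worse than the optimum, illustrating why a different algorithm is needed once undo costs appear; setting every d to zero makes the greedy order optimal again for this example, matching the original result.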
Similar Resources
Optimal Satisficing
Herbert Simon introduced the notion of satisficing to explain how boundedly rational agents might approach difficult sequential decision problems. His satisficing decision makers were offered as an alternative to optimizers, who have impressive computational capacities which allow them to maximize. There is no reason, however, why satisficers can not do their task optimally. In this paper, we p...
Type-Based Exploration with Multiple Search Queues for Satisficing Planning
Utilizing multiple queues in Greedy Best-First Search (GBFS) has been proven to be a very effective approach to satisficing planning. Successful techniques include extra queues based on Helpful Actions (or Preferred Operators), as well as using Multiple Heuristics. One weakness of all standard GBFS algorithms is their lack of exploration. All queues used in these methods work as priority queues...
Cost Based Satisficing Search Considered Harmful
Recently, several researchers have found that cost-based satisficing search with A* often runs into problems. Although some "workarounds" have been proposed to ameliorate the problem, there has not been any concerted effort to pinpoint its origin. In this paper, we argue that the origins can be traced back to the wide variance in action costs that is observed in most planning domains. We show ...
Exploration and Combination: Randomized and Multi-Strategies Search in Satisficing Planning
Heuristic (Informed) Search takes advantage of problem-specific knowledge beyond the definition of the problem itself to find solutions more efficiently than uninformed search, such as Breadth-First Search (BFS) and Depth-First Search (DFS). We design domain-dependent search algorithms to plan tasks. However, the domain-dependent design pattern cannot be applied to fully automatic domains, such ...
A Novel Technique for Avoiding Plateaus of Greedy Best-First Search in Satisficing Planning
Greedy best-first search (GBFS) is a popular and effective algorithm in satisficing planning and is incorporated into high-performance planners. GBFS in planning decides its search direction with automatically generated heuristic functions. However, if the heuristic functions evaluate nodes inaccurately, GBFS may be misled into a valueless search direction, thus resulting in performance degrada...
Combining Heuristic Estimators for Satisficing Planning
The problem of effectively combining multiple heuristic estimators has been studied extensively in the context of optimal planning, but not in the context of satisficing planning. To narrow this gap, we empirically examine several ways of exploiting the information of multiple heuristics in a satisficing best-first search algorithm, comparing their performance in terms of coverage, plan quality...
Publication date: 1983